Creating Belonging through the Power of Language
“We’re all used to hearing ‘hey guys’ in a meeting, but what if I said ‘hey gals’? It should feel the same, but it doesn’t. So, what if we said ‘hey everyone’ instead?”
Yeti Khim, an MBA candidate and co-founder of the startup Inclusively.ai, posed this question to the audience at the MIT $100K Entrepreneurship Competition Launch event in May. The startup, whose team also includes co-founder Priya Bhasin and team member Joyce Chen, both MBA candidates, took second place at the annual contest.
“This is an issue that even our team struggles with,” Khim continued. “We understand what it feels like to be excluded, and we’re here to build a world where everyone feels invited to the conversation.”
As Khim, Bhasin, and Chen went on to explain, Inclusively.ai wants to help people and organizations create and maintain environments that foster diversity, equity, and inclusion (DEI) through interpersonal communication. How? With a new proprietary application built on advances in artificial intelligence (AI) and natural language processing (NLP). By detecting bias in daily correspondence and offering suggestions for more inclusive language, the group believes they can accomplish this goal and much more.
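To make the idea concrete, a toy, rule-based version of that flag-and-suggest loop might look like the sketch below. The phrase list and function name are purely illustrative; Inclusively.ai's actual product relies on learned models rather than a fixed lookup table.

```python
import re

# Illustrative only: a tiny hand-written phrase map, not Inclusively.ai's model.
SUGGESTIONS = {
    r"\bhey guys\b": "hey everyone",
    r"\bmanpower\b": "staffing",
    r"\bchairman\b": "chairperson",
}

def suggest_inclusive(text: str) -> list[tuple[str, str]]:
    """Return (flagged phrase, suggested alternative) pairs found in the text."""
    matches = []
    for pattern, replacement in SUGGESTIONS.items():
        for match in re.finditer(pattern, text, flags=re.IGNORECASE):
            matches.append((match.group(0), replacement))
    return matches

print(suggest_inclusive("Hey guys, do we have enough manpower for the launch?"))
# [('Hey guys', 'hey everyone'), ('manpower', 'staffing')]
```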
“There's a lot of emphasis on trainings or workshops in companies, but many managers are still struggling,” says Bhasin. “They’re still trying to figure out how to be more inclusive, or how to help teams and organizations be more inclusive. That’s the pain point, and I think that’s the core insight we’ve learned throughout this process. We want to make DEI more actionable.”
The aha moment
If Inclusively.ai sounds like the cloud-based typing assistant Grammarly, that’s because Bhasin was inspired by the popular proofreading application during the Hack for Inclusion event “Unlocking Human Capital” in April.
Bhasin and Khim were randomly paired together to address the “Retaining Diverse Talent” session sponsored by Bain & Company. Participants were challenged to develop methods for ensuring that newly created talent pipelines for recruiting and retaining a more diverse workforce could weather the Great Resignation.
“We proposed something else for the Hackathon,” says Bhasin, “but the day before, I thought, ‘What if we could make something like Grammarly but for detecting bias?’”
As Bhasin later explained to the MIT $100K judges, while “Grammarly is trained on documents for things like spelling corrections,” Inclusively.ai is “tackling personal communication and specifically extracting sentences that are toxic” with a far more nuanced and dynamic dataset. Specifically, they are employing powerful deep learning methods to create an original enterprise language neutrality engine.
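The team has not published its engine, but a minimal sketch of sentence-level toxicity extraction with an off-the-shelf pretrained classifier might look like the following. The model choice, the naive sentence splitting, and the threshold are assumptions for illustration, not the startup's proprietary approach.

```python
# Minimal sketch: flag toxic sentences with a publicly available classifier.
# The model, the naive sentence splitting, and the threshold are placeholders.
from transformers import pipeline

classifier = pipeline("text-classification", model="unitary/toxic-bert")

def flag_toxic_sentences(message: str, threshold: float = 0.5) -> list[str]:
    """Split a message into sentences and return the ones scored as toxic."""
    sentences = [s.strip() for s in message.split(".") if s.strip()]
    flagged = []
    for sentence in sentences:
        top = classifier(sentence)[0]  # label/score semantics depend on the model
        if top["label"].lower() == "toxic" and top["score"] >= threshold:
            flagged.append(sentence)
    return flagged
```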
As she and Khim made their final preparations for the Hackathon demonstration, Bhasin emailed her Hands-On Deep Learning professor, Rama Ramakrishnan, SM ’90, PhD ’94 (Professor of the Practice, Data Science, and Applied Machine Learning), for feedback. He responded with examples of the kind of code she would need to write to make Inclusively.ai a reality.
“The idea is very timely,” says Ramakrishnan. “The focus on bias and toxicity in language has never been greater and there’s a very clear and rising need for language detox and debiasing products. And given recent advances in AI NLP, it is now technologically possible to build robust products to do this that can work at scale.”
Bhasin and Khim included an early version of what would eventually become Inclusively.ai in their Hackathon pitch. When Khim delivered the final presentation, they realized the potential power of what they were proposing.
“The whole crowd gasped and it was an ‘aha’ moment for us,” Bhasin recalls. “When I heard that gasp, I felt validated. It was a signal to us that this could be something beyond Hackathon.”
A safe sandbox
After wowing the Hackathon crowd, Bhasin and Khim decided to keep pursuing Inclusively.ai.
The deadline for applying to MIT delta v at the Martin Trust Center for MIT Entrepreneurship had already passed, but there were still a few weeks left to apply for MIT $100K. For additional help with coding and business development, Bhasin reached out to Chen, her classmate from the AI for Impact course taught by Sandy Pentland, PhD ’82 (Toshiba Professor of Media Arts and Sciences; Director of Media Lab Entrepreneurship Program).
The trio immediately set to work outlining the startup’s mission, vision, and core values, as well as building and testing the product itself. They also reached out to Ramakrishnan, Dev Bala, EMBA ’20, and Reza Rahaman, SM ’85, PhD ’89 (Bernard M. Gordon Industry Co-Director; Senior Lecturer, MIT Technical Leadership Program), to serve as advisors. Ramakrishnan is assisting with matters of AI and NLP, while Bala and Rahaman are contributing to product development and DEI, respectively.
“As a non-white, gay, Muslim man, this subject is deeply personal to me,” says Rahaman. “The passion that Priya, Yeti, and Joyce bring to diversity, equity, inclusion, and belonging in general—and at Inclusively.ai in particular—gives me energy and validates something I care deeply about.”
Much of their early and ongoing efforts emphasized the cultivation of a growth mindset, both for the Inclusively.ai team and its target market. Doing so, Chen explains, would allow them to create a “safe sandbox” in which their users could freely and comfortably learn and apply AI-powered lessons in inclusivity.
“By focusing on interpersonal communication, we can help people become more aware of the words they’re using,” says Chen. “As a result, they can participate differently in their professional and personal lives. There’s a ripple effect to teaching inclusiveness, and with the technology we’re using, we can kickstart it within a person’s email or chat application.”
Khim agrees, adding that it is important to empathize with others and trust that they have good intentions.
“We don’t want to write someone off just because they said something that was a microaggression. If we do, then we really can’t move forward together,” she says. “Having that trust and positive intent to grow with each other is super important for us to really achieve what we’re aspiring to do.”
Creating belonging
What Inclusively.ai aspires to do is amplify a sense of belonging through the power of everyday language. The team hopes that feeling will prove influential enough to transform entire communities, to the benefit of organizations and the ecosystems they inhabit.
“I’ve been on teams where I was the only woman engineer, and when I transitioned into leadership roles, I was still the only woman in the room,” says Bhasin. “In those situations, you’re often not treated in the same way as everyone else.”
On the one hand, an AI-enhanced coach for identifying, counseling against, and offering more inclusive alternatives to biased, toxic language may not seem powerful enough to create belonging. On the other hand, incremental DEI lessons delivered through a Grammarly-style bias detector have the potential to ripple throughout users’ professional and personal lives.
“DEI doesn’t have to be about being ‘woke’ or hours upon hours of sensitivity training,” says Bala, head of platform at Discord and professional advisor at the Trust Center. “Meaningful change is the result of several small, repeated investments in improvement, and that’s the approach Inclusively.ai is taking—one email, DM, and message at a time.”
Inclusively.ai recently launched a Slack bot as their beta pilot and is welcoming individuals and teams to try out the product. For more information, visit their website or reach out directly.
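The article does not describe the bot's internals, but a minimal Slack listener in that spirit, built with the open-source slack_bolt library and a stubbed-out suggestion, might look like this sketch; the trigger phrase and reply text are hypothetical stand-ins.

```python
# Hypothetical sketch of a Slack bot that nudges toward more inclusive phrasing.
# Tokens come from a Slack app you configure; the trigger and reply are stand-ins
# for Inclusively.ai's actual model, which is not public.
import os
import re

from slack_bolt import App

app = App(
    token=os.environ["SLACK_BOT_TOKEN"],
    signing_secret=os.environ["SLACK_SIGNING_SECRET"],
)

@app.message(re.compile(r"\bhey guys\b", re.IGNORECASE))
def offer_suggestion(message, say):
    # Reply in a thread so the nudge stays close to the original message.
    say(
        text="Quick thought: “hey everyone” invites the whole team in.",
        thread_ts=message["ts"],
    )

if __name__ == "__main__":
    app.start(port=int(os.environ.get("PORT", 3000)))
```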